Frequently asked questions about the GISS model

Here are some frequently asked questions and some standard responses. Please feel free to add new questions (and answers!), or to amend a response to make it clearer or more complete.

1. General questions

  1. Why has all the coding been moved out of main?
  2. Why is modular code so great anyway?
  3. Why are there three files for every module, why not just one?
  4. How do I know where to find definitions and declarations then?
  5. Why doesn't the model produce 'nice' output as it runs?
  6. Something is wrong with the advection, my tracers are going haywire!
  7. The model crashed in the dynamics near the pole. What should I do?
  8. Why should I switch to using modelE?
  9. Why do I get single precision answers from double precision variables, and what are all those 'd0's doing?
  10. The model core dumps!

2. FAQs about the Parameter database

  1. What did we do?
  2. Why did we do this?
  3. How was it done?
  4. How does it work?
  5. So what are the actual interfaces?
  6. Any examples?

3. FAQs about Preprocessing instructions

  1. What is preprocessing?
  2. How to set up global preprocessing options in a rundeck
  3. How to use global preprocessing options in a source file
  4. How does it work?
  5. Any recommendations on usage?

General Questions

1) Why has all the coding been moved out of main?

In order to increase the modularity of the code, many tasks that were previously in main and input (input/output, initialisations, diagnostics etc.) have been moved to the relevant physical module. Thus, main should not vary as much between model versions. Also, changes to a module (to the number of prognostic variables, the functionality etc.) should now only concern that module and not the rest of the model. This should make it easier to 'plug and play' new functionality.

2) Why is modular code so great anyway?

So that if you want to change something, you only need to do it in the part of the code directly affected. There should be far fewer unforeseen consequences from seemingly minor changes.

3) Why are there three files for every module, why not just one?

There are generally three logical parts to each module: a common block (which owns the variables on the main model grid and/or saves output from a module that might be used by another part of the code), the local physics (for instance, the pure 1D column physics) and the drivers (the routines that access the local physics, do initialisation etc.).

The reason that they are in separate files is mainly the logical dependencies between the main model and module variables. Consider a situation where the module needs information from another module, i.e. there is a USE statement in one of the drivers. Conceivably, that module might USE a variable from this module also. Since we now use an automatically generated list of dependencies in the makefile, this would create a circular dependency if the USE statements and the variable definitions were in the same file. Therefore, at minimum the common block and the drivers must be separated. We generally choose to make the local physics a separate file also, because it is not usually dependent on anything except the CONSTANT module and possibly the main model common block. Thus it does not need to be recompiled every time something in the drivers or common block changes. However, sometimes the local physics is in the same file as the COM module.
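
As an illustration, a hypothetical module might be split across three files as follows (all names here are invented, chosen only to show the pattern):

      ! FOO_COM.f: owns the variables on the main model grid
      MODULE FOO_COM
      IMPLICIT NONE
      REAL*8, DIMENSION(72,46) :: foo_flux ! saved output, USEd elsewhere
      END MODULE FOO_COM

      ! FOO.f: the local (column) physics; depends only on CONSTANT
      SUBROUTINE foo_column( t, q, lm )
      IMPLICIT NONE
      INTEGER, INTENT(IN) :: lm
      REAL*8, INTENT(INOUT) :: t(lm), q(lm)
      ! ... pure 1D column physics ...
      END SUBROUTINE foo_column

      ! FOO_DRV.f: the drivers; may USE other modules' COM variables
      SUBROUTINE foo_driver
      USE FOO_COM, only : foo_flux
      ! loop over the grid, call foo_column, store results in foo_flux
      END SUBROUTINE foo_driver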

4) How do I know where to find definitions and declarations then?

The rules are relatively simple (but not yet implemented 100% of the time).

One exception to this is when local parameters are required by another part of the model (e.g. for the input NAMELIST); then the variable might be defined in the local physics module, USE'd by the COM module, and only accessed through COM. Drivers can USE both the COM variables and the local physics parameters.
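
For instance (a sketch with invented names), a physics parameter needed by the rest of the model could be defined in the physics module, re-exported by the COM module, and accessed only through COM:

      MODULE FOO            ! local physics: defines the parameter
      REAL*8 :: foo_coeff = 1.d0
      END MODULE FOO

      MODULE FOO_COM        ! COM: re-exports it for the rest of the model
      USE FOO, only : foo_coeff
      END MODULE FOO_COM

      ! any other part of the model then does:
      !     USE FOO_COM, only : foo_coeff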

5) Why doesn't the model produce 'nice' output as it runs?

The GISS GCM output takes the form of accumulation files, and since the accumulated quantities are neither averaged nor scaled, looking directly at these 'acc' files is not very illuminating. So it is true that the running output is not 'nice'. However, there are a number of reasons why we do it this way. Firstly, the accumulated arrays are generally the basic model values; the 'nice' diagnostics that are required are generally a function of a number of these variables. Accumulated arrays and files can be concatenated very easily (and thus averaged trivially) without having to worry about the length of time each array was accumulated (say, a different number of days in two different months). The average of a derived quantity (such as the albedo) is not the same as the quantity derived from the averages (which is usually what is required); accumulated arrays allow you to compute either, depending on what is needed. And a great many diagnostics can be derived after the fact, without having to have thought about them ahead of time.
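
The albedo makes the point concrete. In this sketch (the array and variable names are invented) the accumulated quantities are the incident and reflected shortwave fluxes; the albedo itself is only formed when the diagnostics are processed:

      ! every time step: accumulate the basic fluxes, not their ratio
      acc_swd(i,j) = acc_swd(i,j) + sw_down
      acc_swu(i,j) = acc_swu(i,j) + sw_up

      ! at output time: the albedo derived from the accumulated fluxes
      albedo = acc_swu(i,j) / acc_swd(i,j)
      ! accumulating sw_up/sw_down at each step and averaging that would
      ! give a different (and usually not the desired) answer; the acc
      ! arrays let you compute either one after the fact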

6) Something is wrong with the advection, my tracers are going haywire!

This is not a problem in the advection. The GISS dynamical core is unique in that it uses more information than just the mean tracer amount in a particular grid box. We currently use two flavours of tracer advection, the linear and the quadratic upstream schemes. The linear scheme carries 3 first order moments along with the mean, while the quadratic scheme carries an additional 6 second order moments. These can be thought of as representing the spatial gradients of the tracer concentration at the centre of a grid box. Thus a higher degree of accuracy for the tracer distribution in the box can be found as a Taylor series expansion in the moments. This is used in the advection routines to accurately move tracer mass from box to box. However, as should be clear, if the moments are incorrect or corrupted in any way, the advection will be compromised, even to the point where negative tracer amounts might appear. This is usually what has happened when the model crashes after advecting tracers.

What could cause the moments to be corrupted? Generally, the coding of the tracers did not consider the effects of a particular physical process on the moments. The most serious omission is failing to reduce the moments in the same proportion as the tracer mass is reduced. This omission can leave tracer moments significantly larger than the mean tracer amount, and implies a sub-grid scale tracer profile that is not positive definite; a sketch of the required bookkeeping follows. Please read the document Using tracer moments in the GISS GCM for further information on how to use the moments in a useful fashion.
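
In outline (the array names here are illustrative): if a process removes a fraction f of the tracer mass in a box, every moment must be scaled by the same factor, otherwise the implied sub-grid profile may go negative:

      ! a process removes a fraction f of the tracer mass in box (i,j,l)
      trm(i,j,l)     = trm(i,j,l)     * (1.d0 - f)
      ! scale ALL first and second order moments in the same proportion
      trmom(:,i,j,l) = trmom(:,i,j,l) * (1.d0 - f)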

7) The model crashed in the dynamics near the pole. What should I do?

Occasionally (every 15-20 model years), the model will produce very fast velocities in the lower stratosphere near the pole (levels 7 or 8 for the standard layering). This will produce a number of warnings from the advection (such as "limitq warning: abs(a)>1") and then finally a crash ("limitq error: new sn < 0"). There are a number of things you can do to get past such an error: i) go back to the last monthly rsf file and use ISTART=4, or ii) change DT to a smaller value (like 180 sec), run past the crash for a couple of days, and then increase DT back to normal afterwards.

The second option is more straightforward and can be dealt with automatically (see here for more details). The first option is only guaranteed to work if the number of hours that have elapsed from the start of the run to the end of the last month is not an integer multiple of NRAD (this ensures that the model will follow a different path). If there is still a problem, going back to the previous month's restart file generally works.
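
For option (ii), assuming the timestep appears in your rundeck's parameter block under the name DT (see the Parameter database section below), the temporary change is just one line; restore the original value once you are past the crash:

DT=180.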

Please make a note in the rundeck that this happened, and how you fixed it.

8) Why should I switch to using modelE?

It is of course up to you. However, there are a number of reasons why it makes sense for you to make the effort involved in upgrading.

9) Why do I get single precision answers from double precision variables, and what are all those 'd0's doing?

All numbers in the GCM should be accurate to the degree of their representation; however, many are not. This mostly stems from the automatic conversion that takes place when a single precision or integer value is assigned to a double precision variable. In the following examples the double precision variable will only be accurate to 6 or so decimal places (instead of the 15 or so expected).

      REAL*8 X
      X = 0.1                  ! => 0.10000000149011612
      X = 1/3                  ! => 0. (integer division)
      X = 1./3.                ! => 0.3333333432674408
      X = 1./3                 ! => 0.3333333432674408
      X = 0.3333333333333333   ! => 0.3333333432674408 (!!!!)

To get double precision results you must use the 'd' exponent notation, e.g. X=0.1d0 or X=1d-1, and X=1d0/3d0, X=1./3d0, or even X=1d0/3.

Note that for decimals exactly expressible in binary there are no problems, i.e. X=0.5 is the same as X=5d-1. Where mixed-type division is concerned, the integer is converted to the type of the real operand. Thus 1./IM gives only single precision, whereas REAL*8 :: FIM = IM, BYIM = 1./FIM gives double precision (since the denominator is already double precision).
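
For reference, here are the same assignments written so that full precision is retained (a sketch; the value chosen for IM is arbitrary):

      REAL*8 X, FIM, BYIM
      INTEGER, PARAMETER :: IM = 72
      X = 0.1d0         ! 0.1 accurate to full double precision
      X = 1d0/3d0       ! 0.3333333333333333
      X = 1./3d0        ! also double precision: the 1. is promoted
      FIM = IM          ! integer-to-real*8 conversion is exact
      BYIM = 1./FIM     ! double precision: the denominator is real*8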

Some compilers provide an option (such as -r8) that removes these problems, but not all do. Hence, for maximum portability, we are trying to be explicit about writing those 'd0's out.

10) The model core dumps!

This problem can arise from a multitude of causes; only the most common ones unrelated to (heretofore undetected) programming errors can be outlined here.


2. FAQs about the Parameter database.

1) What did we do?

We replaced parameter lists JC and RC and corresponding common blocks with a Parameter Database.

2) Why did we do this?

The following goals were pursued:

3) How was it done?

We created a module which keeps a database of all parameters used in the model, and provided a library of interface routines. These interface routines are used to get the values of parameters from the database (or to set new parameters in the database). Rundeck parameters are loaded automatically into this database (with one call in INPUT) when the program starts. The database is saved to the restart file each time a restart file is written.

4) How does it work?

For each parameter the database keeps its name as a character string and its value (or list of values for an array). To get the value of a parameter "my_param" and assign it to the variable var one would do:

      call get_param( "my_param", var )

If one wants to create a new parameter in the database with the name "new_par" and set it to the value var one would do:

      call set_param( "new_par", var ) 

It is recommended that you use the same name for the variable and for its name in the database, i.e.:

     call set_param( "new_par", new_par ) 
     call get_param( "my_param", my_param ) 

This will improve the readability of the code.

At restart, all parameters from the rundeck are first loaded into the database under the names with which they appear in the rundeck. Then the restart file is read, and those parameters which were saved to it but were not present in the rundeck are added to the database. If a parameter was present both in the rundeck and in the restart file, preference is given to the rundeck and the restart file value is ignored. This is done so that you can always override certain parameters using the rundeck.

5) So what are the actual interfaces?

Rundeck: All parameters have been moved out of the namelist block to a new block which starts with &&PARAMETERS and ends with &&END_PARAMETERS. This block is read by a special parser (not by namelist). The syntax of this block is similar to that of the namelist with the following restrictions:

Since the parser reads parameters from the rundeck straight into the database, there is no need to describe those parameters in advance anywhere in the code. The drawback is that the parser has to derive the type of the data and the size of arrays from the information provided in the rundeck. The following convention is used:
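
As an illustration, such a block might look like this (the parameter names are invented, and the typing convention shown is the presumed one: a value without a decimal point is read as an integer, one with a decimal point or exponent as real*8, a quoted string as a character parameter, and a comma-separated list as an array):

&&PARAMETERS
KOCEAN=1
DT=450.
title="my test run"
levels=1.,2.,5.
&&END_PARAMETERS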

PARAM library:

The library utilizes Fortran 90 interface blocks in such a way that the type of a parameter (i.e. integer, real*8, character) is recognized automatically, so the names of the subroutines are the same for all types. If the dimension of the parameter is given then it is treated as an array; if the dimension is omitted then it is a scalar. Beware that Fortran 90 treats scalars and arrays of dimension one differently, so confusing them may generate an error.
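
Schematically, the generic interface might look like this (the specific procedure names are invented):

      ! inside MODULE PARAM:
      interface set_param
        module procedure set_param_i   ! integer scalar
        module procedure set_param_r   ! real*8 scalar
        module procedure set_param_c   ! character scalar
        module procedure set_param_ai  ! integer array
        module procedure set_param_ar  ! real*8 array
        module procedure set_param_ac  ! character array
      end interface

The compiler selects the specific routine from the types and ranks of the actual arguments, which is why a scalar actual argument will never match an array version (and vice versa).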

The formal arguments in the subroutines below are:

Simple copy routines to copy parameters to/from the database:

These routines check whether the parameter is present in the database, and will generate an error if you try to get a parameter which is not in the database, or to set a parameter which is already set. If you really need to overwrite a parameter which is already in the database, use opt='o'; otherwise <opt> should be omitted.
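
For example, with a hypothetical real*8 parameter "dt":

      real*8 :: dt
      call set_param( "dt", 450.d0 )           ! error if "dt" is already set
      call set_param( "dt", 300.d0, opt='o' )  ! 'o': overwrite the old value
      call get_param( "dt", dt )               ! dt is now 300.d0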

Query logical function:

A convenient function which is a combination of those above: sync_param( name, value, dim ) literally consists of the following lines:

      if( is_set_param( name ) ) then
        call get_param( name, value, dim )
      else
        call set_param( name, value, dim )
      endif

So what it does is check whether the parameter <name> is in the database; if so, it copies it to <value>. Otherwise it leaves <value> unchanged and copies it into the database under the name <name>. This function is provided as a convenient tool, since this is what you will do most often: at restart, each module checks whether certain parameters were provided in the rundeck or in the restart file (i.e. they are already in the database) and uses those values. If not, it uses its default values and also copies them to the database, so that they are saved to the restart file for future reference.
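
A typical use at startup might look like this (the parameter name is invented):

      real*8 :: skin_depth = 1.d0   ! default value
      ! in the module's input routine:
      call sync_param( "skin_depth", skin_depth )
      ! skin_depth now holds the rundeck/restart value if one was given;
      ! otherwise the default has just been recorded in the database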

Subroutines to work with pointers:

alloc_param( name, pvalue, initval, dim ) - allocates space in the database for the parameter <name>, fills it with the data provided in <initval> and returns a pointer to it in <pvalue>

get_pparam( name, pvalue, dim ) - returns a pointer <pvalue> to the data of the parameter <name> in the database

These functions are provided as an additional tool and are not actually needed if the parameters in the database are treated as true "parameters", i.e. set only once at the beginning of the run and then used as constants. But it turned out that some data which was in the "parameters" common block was changed during the run (some of it quite often, and in different places). To keep the parameter database up to date with those changes, one would have to call "set_param" each time the data is changed, which is rather awkward. Instead one can use pointers to the parameter data in the database. Pointers in Fortran 90 are nothing but aliases of the objects they point to, so if you get a pointer to a parameter from the database and then work with it as if it were a regular variable, you will automatically update it in the database and there will be no need to call "set_param". These subroutines can thus be used if one wants to keep a dynamic variable in the database (so that it is automatically saved to the restart file).
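
A sketch (names invented) of keeping such a dynamic variable in the database:

      real*8, pointer :: psum
      ! create "psum" in the database with initial value 0.d0 and get
      ! back an alias to the stored data (dim omitted: it is a scalar)
      call alloc_param( "psum", psum, 0.d0 )
      ! later updates go straight to the database copy, so it is saved
      ! to the restart file without any further calls to set_param:
      psum = psum + increment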

Reading/writing subroutines:

read_param( kunit, ovrwrt ) - reads the parameter database from the unit <kunit>

write_param( kunit ) - writes the parameter database to the unit <kunit>

The arguments:

Other useful subroutines:

print_param( kunit ) - does formatted output to the unit <kunit> in a way similar to namelist

The argument:

query_param( n, name, dim, ptype ) - returns the information about a parameter given its number <n> in the database. The name is always converted to lower case. It returns 'EMPTY' (upper case) in <name> if no parameter with number <n> exists (i.e. if <n> is bigger than the number of parameters in the database).

Parser subroutine:

parse_params( kunit ) - parses the information in the file attached to unit <kunit> and loads the parameters into the database. It overwrites existing parameters.

The argument:

This subroutine is actually located in a separate module PARSER, but it seemed logical to describe it here.

6) Any examples?

Here is an example of typical usage of "query_param". Keep in mind that though "get_param" is a generic interface, one must call it with arguments of the correct type to extract the information; that is why the "select case" is needed below.

      subroutine ex_param 
! this is an example subroutine which shows how to loop
! over all parameters in the database
! it does the same thing as print_param
      USE PARAM
      integer, parameter :: MAXDIM=64
      character*32 name
      integer n, dim, i
      character*1 ptype
      integer ic(MAXDIM)
      real*8 rc(MAXDIM)
      character*16 cc(MAXDIM)
      n = 1
      print *, 'printing parameter database'
      do
        call query_param( n, name, dim, ptype )
        if (name == 'EMPTY' ) exit
        if ( dim > MAXDIM ) then
          print *, 'dim of param ',name,' is > MAXDIM'
          stop 'MAXDIM too small'
        endif
        select case( ptype )
        case ('i') !integer
          call get_param( name, ic, dim )
          print *, name, ( ic(i), i=1, dim )
        case ('r') !real
          call get_param( name, rc, dim )
          print *, name, ( rc(i), i=1, dim )
        case ('c') !character
          call get_param( name, cc, dim )
          print *, name, (cc(i), i=1, dim )
        end select
        n = n + 1
      enddo
      end subroutine ex_param

3. FAQs about Preprocessing instructions.

1) What is preprocessing?

The preprocessor is a program that runs before the actual compiler starts and edits the source code according to preprocessing instructions. All preprocessing instructions start with # in the first column. The most typical example of preprocessor usage in Fortran code is:

      ...............
      some fortran code
      ..............
#ifdef OPTION_A
      ...............
      fortran code specific for OPTION_A
      ..............
#endif
      ..............
      more fortran code
      ..............

In the above example the code between #ifdef OPTION_A and #endif will be taken into account only if the name OPTION_A was defined (with the instruction #define OPTION_A) somewhere earlier in the file. Otherwise it is treated as commented out. So the preprocessor allows you to optionally include or exclude certain parts of the code. There are also other useful preprocessing commands, like name substitution, though with name substitution one should be very careful in fixed-format Fortran, since it is easy to create a line which after substitution is longer than 72 characters. Typical preprocessing instructions are: #define, #undef, #include, #ifdef, #ifndef, #if, #else, #endif. The SGI manual pages provide good explanations of these instructions; on an SGI type man cpp.

2) How to set up global preprocessing options in a rundeck?

Your rundeck may contain an optional block which starts with the line Preprocessor Options and ends with the line End Preprocessor Options. Everything between those lines is treated as a set of preprocessing definitions and will be included in the corresponding source files. Here is a simple example of a preprocessing block:

Preprocessor Options
#define TRACERS_ON ! include tracers code
End Preprocessor Options

It defines the name TRACERS_ON. Text which starts with '!' is treated as a comment and is ignored. Trailing spaces and empty lines are also ignored.

3) How to use global preprocessing options in a source file?

If you want to use global preprocessing options in a certain source file you should include the line:

#include "rundeck_opts.h"

at the very beginning of that file. Otherwise all global preprocessing names will stay undefined (the compiler will not give any warning about this). A typical use of a preprocessing option would be:

#include "rundeck_opts.h"
      .......... some code
#ifdef TRACERS_ON
      some tracers code here
#endif
      some code

The code between #ifdef TRACERS_ON and #endif will be included only when the global name TRACERS_ON is defined in the rundeck. Otherwise this code will be ignored by the compiler.

4) How does it work?

When the Makefile starts to compile the model, it reads the global preprocessing options block from the rundeck and compares it to the contents of the file rundeck_opts.h. If they are identical, the file rundeck_opts.h is left unchanged; otherwise it is overwritten with the new preprocessing block. The file rundeck_opts.h should be included in each source file which wants to use global preprocessing options. This is done by putting the line

#include "rundeck_opts.h"

at the beginning of the source file. Note that one should use the CPP include (i.e. #include, which starts in the first column), and not the Fortran include. When the Makefile checks dependencies it takes into account the dependency on rundeck_opts.h, so that when rundeck_opts.h is changed, all files which #include it are automatically recompiled.

5) Any recommendations on usage?

There is an understanding that global preprocessing options should be used only when there is no other convenient way to reach the same goal. Keep in mind that once the global preprocessing block in a rundeck is changed, all files that include rundeck_opts.h will be recompiled, which will most probably force a recompile of the entire model. So such options should be limited to those which do not change too often from rundeck to rundeck. This functionality is introduced mainly for options which are global (i.e. used in many source files at the same time) and which need to be documented (that's why the block is in the rundeck). A typical example would be an option which controls the inclusion of tracer code in the model (as in the example above). There is also a general convention that names defined by the preprocessor are constructed of capital letters only, with '_' introduced for readability, like USE_OPTION_A. I suggest that we observe it; this helps to distinguish such names from the rest of the code, which is usually in lower case. And yes, names used by the preprocessor are case-sensitive (unlike names in Fortran).